feat: stream resources to files #314
Conversation
for instance in instances {
    ddb_resources.push(instance);
}
process_table_resources(
here we could probably just launch the task in the background, rather than awaiting?
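A minimal sketch of that suggestion, with a hypothetical task body standing in for process_table_resources:

// sketch: spawn the task in the background with tokio::spawn instead of
// awaiting it inline; the task body here is illustrative only
async fn process_table_resources() -> Result<(), std::io::Error> {
    // ... gather metrics and stream them to a file ...
    Ok(())
}

async fn example() -> Result<(), Box<dyn std::error::Error>> {
    // spawn returns immediately; the task runs concurrently
    let handle = tokio::spawn(process_table_resources());
    // ... other work proceeds here while the task runs ...
    handle.await??; // first ? handles the JoinError, second the task's own error
    Ok(())
}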
momento/Cargo.toml (outdated)
@@ -25,6 +25,8 @@ aws-sdk-dynamodb = "1.19.0"
aws-sdk-elasticache = "1.18.0"
indicatif = "0.17.8"
flate2 = "1.0.28"
json-writer = "0.3.0"
for resource in resources {
    match resource {
        Resource::DynamoDb(_) => todo!(),
que?
Oh yeah, this is a case that should never happen, since we are only processing ElastiCache resources. I can clear this up and just continue if this happens.
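Something like the following sketch, assuming a Resource enum with an ElastiCache variant (names are illustrative):

// sketch: skip the impossible variant instead of panicking with todo!()
enum Resource {
    DynamoDb(String),
    ElastiCache(String),
}

fn process(resources: Vec<Resource>) {
    for resource in resources {
        match resource {
            // this code path only handles ElastiCache, so hitting the
            // DynamoDb arm should be impossible; skip rather than panic
            Resource::DynamoDb(_) => continue,
            Resource::ElastiCache(cluster) => {
                // ... stream the resource to the output file ...
                let _ = cluster;
            }
        }
    }
}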
json_writer.finish_document()?;

// now we compress the json into a .gz file for the customer to upload
let opened_file_tokio = File::open(file_path).await?;
This is a silly nit, but I don't understand why these file objects all have tokio in their var name :)
Tokio has its own version of all the file system stuff. I could not get the JSON streaming library to work with the tokio versions of files, since they don't implement the std::io::Write trait; they implement the tokio version of that trait instead. There may be an easy workaround for this, but I time-boxed it and could not get it to work. So here we read the files using tokio, and then we convert them to std files so we can stream the JSON data to them.
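A minimal sketch of that conversion, using tokio's File::into_std (the path and function name here are illustrative):

// sketch: create the file with tokio, then convert it into a std::fs::File,
// which implements std::io::Write and so works with the JSON writer
async fn open_json_output(path: &str) -> std::io::Result<std::fs::File> {
    let tokio_file = tokio::fs::File::create(path).await?;
    // into_std() unwraps the tokio wrapper back into the underlying std file
    Ok(tokio_file.into_std().await)
}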
Co-authored-by: Chris Price <[email protected]>
sender
    .send(Resource::DynamoDb(resource))
    .await
    .expect("TODO: panic message");
You sent me a DM about this LOC the other day. I think I understand the reason it won't compile with a `?`. lmk if you want to discuss/pair on it.
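One possible workaround, assuming the blocker is that the channel's SendError<Resource> has no From conversion into the surrounding function's error type, is to map the error into one that does before applying `?`:

// sketch: convert the SendError into an io::Error so ? can propagate it
sender
    .send(Resource::DynamoDb(resource))
    .await
    .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;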
LGTM unless we want to get rid of the todos.
This PR updates the cloud linter command to stream resources to a file, rather than keeping everything in memory before writing it out. The main difference in this PR is that the entry point just calls process_resource_<resource_name>, and then the resource is in charge of getting all metadata/metrics and streaming the results to a file. Previously, the process_resource function would return the resources for further processing.
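A rough sketch of the new control flow; the function names are illustrative, not the actual API:

// illustrative stubs standing in for the per-resource entry points; each
// one gathers metadata/metrics and streams results straight to its file
async fn process_resources_dynamodb() -> std::io::Result<()> {
    Ok(())
}

async fn process_resources_elasticache() -> std::io::Result<()> {
    Ok(())
}

async fn run_cloud_linter() -> std::io::Result<()> {
    // the entry point only dispatches; each resource type owns its pipeline
    // from collection to file output, so nothing accumulates in memory
    process_resources_dynamodb().await?;
    process_resources_elasticache().await?;
    Ok(())
}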